perm filename NOTES.PAA[LSP,JRA]10 blob sn#133787 filedate 1974-12-02 generic text, type C, neo UTF8
COMMENT ⊗   VALID 00013 PAGES
C REC  PAGE   DESCRIPTION
C00001 00001
C00003 00002	.SEC(Applications and Examples of LISP definitions,Applications and Examples)
C00010 00003	First let's write a predicate, %3member%* of two arguments, %3x%* and %3y%*.
C00025 00004	It %2is%* possible to write a directly recursive reversing function
C00028 00005	.SS(Examples of LISP applications)
C00035 00006	.SS(Differentiation,differentiation,P41:)
C00041 00007	Now we can complete the differentiation algorithm.  We know:
C00046 00008	.<<abstracting from rep>>
C00055 00009	.SS(Algebra of polynomials)
C00068 00010	.SS(Evaluation of polynomials,,P2:)
C00072 00011	.<<PROBLEMS ON POLY EVAL AND THE GREAT MOTHERS>>
C00074 00012	.SS(Another Respite)
C00082 00013	.<<PROVING PROPERTIES>>
C00088 ENDMK
C⊗;
.SEC(Applications and Examples of LISP definitions,Applications and Examples)
.BEGIN INDENT 20,20;
"...All the time I design programs for nonexiting machines and add:
`if we now had a machine comprising the primitives here assumed, then the job is
done.'

 ... In actual practice, of course, this ideal machine will turn out not to exist, so
our next task --structurally similar to the original one-- is to program the
simulation of the "upper" machine.... But this bunch of programs is written
for a machine that in all probability will not exist, so our next job will be
to simulate it in terms of programs for a next lower level machine, etc.,
until finally we have a program that can be executed by our hardware...."
.END
.BEGIN TURN ON "→";

→E. W. Dijkstra, %3Notes on Structured Programming%1
.END

.BEGIN TURN ON "←";
.SS(Examples of Definitions)
We have already seen a few examples of definition and evaluation of
non-primitive LISP functions.  This section will spend a good deal of
time showing different styles of definition, giving hints about how to
write LISP functions, and increasing your familiarity with LISP. Finally,
several complete and non-trivial examples will be described as LISP
functions.

The style of definition available in our current subset of LISP is
%2⊗→definition by recursion↔←%*. That is, a typical LISP definition
will contain applications of the very function being defined.
Consider the factorial function, n!. We might characterize this function
by saying:

%21.%*  The function is defined for non-negative integers.

%22.%*  The value of the function for 0 is 1.

%23%*.  Otherwise the value of n! is n times the value of (n-1)!.

The implication here is that it is somehow easier to compute n-1! than
to compute n!. 

Recall also our example of %3equal%* on {yon(P1)}. The question
of equality for two non-atomic sexprs was thrown back to the
question of equality for their %3car%*s and %3cdr%*s. 

These examples are typical of LISP definitions.
The body of the definiton is a conditional expression
first involving a few special cases, called %2⊗→termination conditions↔←%*.
Then, the remainder of the conditional covers the general case-- what to
do if the argument to the function is not one of the special cases.
The termination case for the factorial should be: "is the argument 0?"
⊗↓What we obviously mean is "is the %2value%* of the argument %30%*?".
Unless we desire precision, we will frequently misuse the language this way←;
termination for %3equal%* involves: "are either of the arguments atomic?".

It should now be clear how to write a LISP program for the factorial function:

.BEGIN CENTERIT;SELECT 3;
.P20:
←fact[n] <= [eq[n;0] → 1; T → times[n;fact[n-1]]] ⊗↓%3times%1 is assumed
to be a LISP function  which performs multiplication.  %3n-1%* should actually
be written: %3sub1[n]%*, where the function %3sub1%* does what you think it does.←

.END
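For example, an informal expansion of %3fact[2]%*, assuming the usual behavior
of %3times%* and %3sub1%*:

.BEGIN TABIT2(25,35);SELECT 3;GROUP;
\fact[2] \= times[2;fact[1]]
\\= times[2;times[1;fact[0]]]
\\= times[2;times[1;1]]
\\= 2
.END
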
Notice that %3fact%* is a partial function, defined only for non-negative
integers. %3equal%* is a total predicate; it is defined for all arguments.

When writing or reading LISP definitions pay particular attention to the
domain of definition. The following general hints should be useful:

%21%*.  Is it a function or predicate?

%22%*.  Are there any restrictions on the argument positions?

%23%*.  Are the termination conditions compatible with the restrictions
on the arguments?  If it is a recursion on lists, check for the empty
list; if it is a recursion of arbitrary sexprs, then check for the
appearance of an atom.

%24%*.  Whenever a function call is made within the definition are all
the restrictions on that function satisfied?

%25%*.  Don't try to do too much. Be lazy. There is usually a very simple
termination case. 
If the termination case looks messy there is probably something wrong with your
conception of the program.
If the general case looks messy, then write some subfunctions
to perform the brunt of the calculation. 

Use the above  suggestions when writing any subfunction. When you are
finished, no function will do very much, but the net effect of all the
functions acting in concert is a solution to your problem. That is
part of the mystery of recursive programming.


Here are some more examples of LISP definitions.


First let's write a predicate, %3member%* of two arguments, %3x%* and %3y%*.
%3x%* is to be atomic; %3y%* is to be a list; %3member%* is to return
%3T%* just in the case that %3x%* is  an element of the list %3y%*.

So the predicate is partial, the recursion should be on the structure of %3y%*,
and termination (with value %3NIL%*) should occur if %3y%* is the empty list. If %3y%* is not
empty then it has a first element, call it %3z%*; compare %3z%* with %3x%*.
If these elements are identical then  %3member%* should return %3T%*; otherwise
see if %3x%* occurs in the remainder of the list %3y%*.

Notes:

%21.%*  We cannot use %3eq%* to check equality since, though %3x%* is atomic, there is no
reason to believe that the elements of %3y%* need be atomic.

%22.%*  Recall that we can get the first element of a list with 
%3first%* or %3car%*, 
and the rest of a list with %3rest%* or %3cdr%*.

So here's %3member%*:
.BEGIN TABIT2(16,32);SELECT 3;GROUP;
\member[x;y] <=\[null[y] → NIL;
\\ equal[car[y];x] → T;
\\ T → member[x;cdr[y]]]

.END
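So, for example:

.BEGIN TABIT2(20,42);SELECT 3;GROUP;
\member[B;(A (C) B)]\= member[B;((C) B)]
\\= member[B;(B)]
\\= T
.END
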

Let us now consider a slightly more complicated arithmetical example, the 
⊗→fibonacci sequence↔←: 0, 1, 1, 2, 3, 5, 8, ... . This sequence is frequently
characterized by the following recurrence relation:

.BEGIN CENTERIT;
.GROUP
←f(0) = 0
←f(1) = 1
←f(n) = f(n-1)+f(n-2);
.APART
.END

.GROUP
The translation to a LISP function is easy:

.BEGIN TABIT2(16,26);SELECT 3;GROUP;
\fib[n] <=\[eq[n;0] → 0;
\\ eq[n;1] → 1;
\\ T → plus[fib[n-1];fib[n-2]]]
.END

where %3plus%* is a representation of the mathematical function, %3+%*.
.APART

A few points can be made here. First, notice that the intuitive evaluation
scheme requires many duplications of computation.  For example,
computation of %3fib[5]%* requires the computation of %3fib[4]%* and %3fib[3]%*.
But within the calculation of %3fib[4]%* we must again calculate %3fib[3]%*,
etc.  It would be nice if we could restructure the definition of the function,
%3fib%* to stop this duplication of calculation. Since we %2do%* wish to run
programs on a machine we should give some attention to efficiency.
To those with programming experience, the solution is easy: assign the
partial computations to temporary variables.  The problem here is that
our current subset of LISP doesn't contain assignment. There is however
a very useful trick which we can use.
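Before giving the trick, look at a sketch of the calls generated by %3fib[4]%*;
each call requires the evaluation of the calls below it:

.BEGIN NOFILL;SELECT 3;GROUP;

                         fib[4]
               fib[3]              fib[2]
          fib[2]   fib[1]      fib[1]   fib[0]
      fib[1] fib[0]

.END
Here %3fib[2]%* is evaluated twice and %3fib[1]%* three times.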

We will define another function, called %3fib%8/%1, on three variables %3x%*,
%3y%*, and %3n%*. The variables, %3x%* and %3y%*, will be used to carry
the partial computations. Consider:

.BEGIN TABIT2(17,34);SELECT 3;CENTERIT;
.GROUP
←fib%41%*[n] <= fib%8/%*[n;0;1]

\fib%8/%*[n;x;y] <=\[eq[n;0] → x;
\\ T → fib%8/%*[n-1;plus[x;y];x]]
.APART
.END

This example is complicated enough to warrant examination. The initial call,
%3fib%41%*[n]%1, has the effect of calling %3fib%8/%1 with %3x%* initialized to
%30%* and with %3y%* initialized to %31%*. The i%8th%* recursive call on %3fib%8/%1 within
the body of the definition has the effect
of saving the i%8th%* fibonacci number in %3x%* and the i-1%8st%* in %3y%*.
For example:

.BEGIN TABIT2(25,37);SELECT 3;GROUP;
\fib%41%*[4] \= fib%8/%*[4;0;1]
\\= fib%8/%*[3;1;0]
\\= fib%8/%*[2;1;1]
\\= fib%8/%*[1;2;1]
\\= fib%8/%*[0;3;2]
\\= 3
.END

This same trick of using ⊗→auxiliary function↔←s can be applied to the
factorial example. When viewed computationally, the resulting definition 
will be more efficient, though
the gain in efficiency is not as apparent  as that in the fibonacci
example ⊗↓The %3fib%41%1 example improves efficiency mostly 
by calculating fewer intermediate results.
The gain in the %3fact%41%1 example is involved with the machinery
necessary to actually execute the program: the run-time
environment if you wish. We will discuss this when we talk about
implementation of LISP in {yonsec(P107)}. The whole question of "what is
efficient?" is difficult to discuss.←. Thus:

.BEGIN TABIT1(10);SELECT 3;GROUP;CENTERIT;
.P21:
←fact%41%*[n] <= fac%8/%*[n;1];

←fac%8/%*[n;x] <= [eq[n;0] → x; T → fac%8/%*[n-1;times[n;x]]]
.APART
.END
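As with %3fib%8/%1, a trace shows the partial products accumulating:

.BEGIN TABIT2(25,37);SELECT 3;GROUP;
\fact%41%*[3] \= fac%8/%*[3;1]
\\= fac%8/%*[2;3]
\\= fac%8/%*[1;6]
\\= fac%8/%*[0;6]
\\= 6
.END
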

It is clear in these examples that the functions %3fact, fact%41%1 and
%3fib, fib%41%1 are equivalent. Perhaps we should prove that this is so.
We shall examine the question of proofs of equivalence in {yonss(P15)}.

The trick of auxiliary functions is clearly applicable to LISP functions
defined over sexprs:

.BEGIN CENTERIT;SELECT 3;group;
.P19:
←length[n] <= [null[n] → 0; T → plus[1;length[cdr[n]]]]

←length%41%*[n] <= length%8/%*[n;0]

←length%8/%*[n;x] <= [null[n] → x; T → length%8/%*[cdr[n];plus[x;1]]]
%1and it seems apparent that:
%3←length[n] ≡ length%41%*[n]
.APART
.END
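For example:

.BEGIN TABIT2(25,37);SELECT 3;GROUP;
\length%41%*[(A B C)] \= length%8/%*[(A B C);0]
\\= length%8/%*[(B C);1]
\\= length%8/%*[(C);2]
\\= length%8/%*[NIL;3]
\\= 3
.END
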

So far our examples have either been numerical or have been predicates.
Predicates only require traversing existing sexprs; certainly  we will
want to write algorithms to build new sexprs.  Consider:

.BEGIN CENTERIT;SELECT 3;GROUP;
.P97:

←reverse[x] <= rev%8/%*[x;NIL]

←rev%8/%*[x;y] <= [null[x] → y; T → rev%8/%*[cdr[x];cons[car[x];y]]]
.APART
.END

The function, ⊗→%3reverse%*↔←, builds a list which is the reverse of its
argument.  Notice that this definition uses an auxiliary function.  
.P16:
Sometimes it is more natural to express algorithms this way. We will
see a "direct" definition of the reversing function in a moment.

This %3reverse%* function builds up the new list in a very straightforward manner, %3cons%*ing
the elements onto the second argument of %3rev%8/%*%1. Since %3y%* was initialized
to %3NIL%* we are assured that the resulting construct will be a list.
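For example:

.BEGIN TABIT2(25,37);SELECT 3;GROUP;
\reverse[(A B C)] \= rev%8/%*[(A B C);NIL]
\\= rev%8/%*[(B C);(A)]
\\= rev%8/%*[(C);(B A)]
\\= rev%8/%*[NIL;(C B A)]
\\= (C B A)
.END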

Construction is usually not quite so straightforward. Suppose we wish to
define a LISP function named ⊗→%3append%*↔← of two list arguments, %3x%* and %3y%*,
which is to return a new list which has %3x%* appended onto the front of %3y%*.
For example:

.BEGIN CENTERIT;SELECT 3; GROUP;

←append[(A B D);(C E)] = (A B D C E)
←append[A;(B C)] %1 is undefined%*. %3A%1 is not a list.
←%3append[(A B C);NIL] = append[NIL;(A B C)] = (A B C)
.APART
.END

So %3append%* is a partial function. It should be defined by recursion,
but recursion on which argument?  Well, if either argument is %3NIL%*
then the value given by %3append%* is the other argument. 
The next simplest case is a one-element
list; if exactly one of %3x%* or %3y%* is a singleton how does that help us
discover the recurrence relation for appending? It doesn't help much if %3y%*
is a singleton; but if %3x%* is, then %3append%* could give:

←%3cons[car[x];y]%* as result.

So recursion on %3x%* is likely. The definition follows easily now.

.P22:
←%3append[x;y] <= [null[x] → y; T → cons[car[x];append[cdr[x];y]]].%*

Notice that the construction of the result is a bit more obscure than
that involved in %3reverse%*. The construction has to "wait" until
we have seen the end of the list %3x%*. For example:

.BEGIN TABIT2(10,40);SELECT 3;GROUP;
\append[(A B C);(D E F)] \= cons[A;append[(B C);(D E F)]]
\\= cons[A;cons[B;append[(C);(D E F)]]]
\\= cons[A;cons[B;cons[C;append[NIL;(D E F)]]]]
\\= cons[A;cons[B;cons[C;(D E F)]]]
\\= cons[A;cons[B;(C D E F)]]
\\= cons[A;(B C D E F)]
\\= (A B C D E F)
.APART
.END

We are assured  of constructing a list here because %3y%* is a list
and we are %3cons%*ing onto the front of it. LISP functions
which are to construct list-output by %3cons%*ing %2must%*  %3cons%* onto  
the front of an existing %2list%*. That list may be either
non-empty or the empty list, %3NIL%*. 
This is why the termination condition on a list-constructing function,
such as the following %3dotem%*,
returns %3NIL%*.

The arguments to %3dotem%* are both lists assumed to contain the same
number of elements. The value returned is to be a list of dotted pairs; the 
elements of the pairs are the corresponding elements of the input lists. Thus:

.BEGIN SELECT 3;TABIT1(16);

dotem[x;y] <= [\null[x] → NIL;
\T → cons[cons[car[x];car[y]];dotem[cdr[x];cdr[y]]]]]

.END
Look at a computation as simple as %3dotem[(A);(B)]%*. This will involve

.BEGIN CENTERIT;SELECT 3;
←cons[cons[A;B];dotem[NIL;NIL]]

%1Now the evaluation of %3dotem[NIL;NIL]%1 returns our %3NIL%*, giving

←%3cons[cons[A;B];NIL] = cons[(A . B);NIL] = ((A . B))
.END

If the termination condition of %3dotem%* returned anything other than
%3NIL%* then the list-construction would "get off on the wrong foot"
and would not generate a true list.
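For instance, had the termination clause returned the atom %3A%* instead,
then %3dotem[(A);(B)]%* would give %3cons[(A . B);A] = ((A . B) . A)%*,
which is not a true list.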

Now, as promised on {yon(P16)}, here is a "direct" definition of %3reverse%*.

.BEGIN SELECT 3;TABIT1(13);GROUP;

.P23:
reverse[x] <=\[null[x] → NIL;
\ T → append[reverse[cdr[x]];cons[car[x];NIL]]] 
.END

This reversing function is not as efficient as the previous one.
Within the construction of the reversed list the %3append%* function
is called repeatedly. You should evaluate something like %3reverse[(A B C D)]%*
to see the gross inefficiency.
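The first few steps of such an evaluation, informally:

.BEGIN TABIT2(10,40);SELECT 3;GROUP;
\reverse[(A B C D)] \= append[reverse[(B C D)];(A)]
\\= append[append[reverse[(C D)];(B)];(A)]
\\= append[append[append[reverse[(D)];(C)];(B)];(A)]
\\= append[append[append[(D);(C)];(B)];(A)]
.END

and each %3append%* must in turn recur down the length of its first argument.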
.END

It %2is%* possible to write a directly recursive reversing function
with no auxiliary functions, no functions other than the primitives, and
no efficiency. We shall do so simply because it is a good example of
the process of discovering the general case of the recursion by careful
consideration of examples. Let us call the function %3rev%*.

Let's worry about the termination conditions later. Consider, for example,
%3rev[(A#B#C#D)]%*. This should evaluate to %3(D#C#B#A)%*. How can we
construct this list by recursive calls on %3rev%*? 
In the following, assume %3x%* is bound to %3(A#B#C#D)%*.
Now note that %3(D#C#B#A)%* is the
value of %3cons[D;(C#B#A)]%*. Then %3D%* is %3car[rev[cdr[x]]]%* (it is also
%3car[rev[x]]%* but that would not help us).
How can we get %3(C#B#A)%*?## Well:
.BEGIN TABIT2(21,30);GROUP;SELECT 3;

\(C B A)\= rev[(A B C)]
\\= rev[cons[A;(B C)]]   %1(we are going after %3cdr[x]%* again)%3
\\                        %1but first  we can get %3A%* from %3x.
\\= rev[cons[car[x];(B C)]]
\\= rev[cons[car[x];rev[(C B)]]]
\\= rev[cons[car[x];rev[cdr[(D C B)]]]]
\\= rev[cons[car[x];rev[cdr[rev[cdr[x]]]]]]
.END

.BEGIN SELECT 3;CENTERIT;

%1Finally%*
←rev[x] = cons[car[rev[cdr[x]]];rev[cons[car[x];rev[cdr[rev[cdr[x]]]]]]]
.END

%1
The termination conditions are simple. First %3rev[NIL]%* gives %3NIL%*.
Then notice that the general case which we just constructed has %2two%*
%3cons%*es.  That means the shortest list which it can make is of length two.
So lists of length one are handled separately: the reverse of such a list
is itself.
Thus the complete definition should be:

.BEGIN SELECT 3;GROUP;TABIT1(13);

rev[x] <= [\null[x] → NIL;
\null[cdr[x]] → x;
\T → cons[car[rev[cdr[x]]];rev[cons[car[x];rev[cdr[rev[cdr[x]]]]]]]]
.END
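Checking the definition on a short list:

.BEGIN TABIT2(15,28);SELECT 3;GROUP;
\rev[(A B)] \= cons[car[rev[(B)]];rev[cons[A;rev[cdr[rev[(B)]]]]]]
\\= cons[B;rev[cons[A;NIL]]]
\\= cons[B;rev[(A)]]
\\= cons[B;(A)]
\\= (B A)
.END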

.REQUIRE "PROB5.PUB" SOURCE_FILE;

.SS(Examples of LISP applications)
.BEGIN TURN ON "←";

The next few sections will examine some non-trivial problems
involving  computations on data structures.  We will describe
the problem intuitively, pick an initial representation for the
problem, write the LISP algorithm, and in some cases "tune" the
algorithm by picking "more efficient" data representations.

The examples share other important characteristics: 

%21%*.  We examine the problem domain and attempt to represent its elements 
as data structures.

%22.%*  We reflect on our (intuitive) algorithm and try to express it 
as a LISP data-structure manipulating function.


%23.%*  While performing %21%* and %22%*, we might have to modify some of
our decisions. Something  assumed to be structure might better be represented
as algorithm, or the converse might be the case.

%24.%*  When the  decisions are made, we evaluate the LISP function on a 
representation of a problem.

%25.%*  We reinterpret the data-structure output as an answer to our problem.

Pictorially in terms of LISP:

.BEGIN GROUP;TABIT1(35);
.P46:

intuitive algorithm => LISP function\|
\|  evaluation
\|==============> interpret sexpr output as answer
\|
problem domain      => sexpressions\|

.END

Whenever we write computer programs, and whatever language we use,
we always go through a similar representation problem. 
The process is more apparent in higher-level languages like 
FORTRAN or ALGOL, and is most noticeable in a language like LISP which
primarily deals with data structures.

When we deal with numerical algorithms, the representation problem
has usually been settled in the transformation from a real-world situation
to a numerical problem. We have to think more explicitly about
representation when we deal with structures like arrays or matrices. 
We are encoding
our information in the array. But the preceding diagram %2is%* 
occurring within the machine, even for strictly non-structured
numerical calculation.

.BEGIN GROUP;TABIT1(40);

numerical algorithm => machine instructions\|
\|  execution
\|==========> interpret binary number as answer
\|
numbers => binary representation\|

.END

The encodings are done by the input routines. The result of the execution is
presented to the external world by the output routines.

However, it is not until we come to data-structure computations, or
non-numerical computations, that the representation problem really
becomes undeniable.  This is partially due to our lack of intuition
or preconception  about such computations. We have to think more about what 
we are doing. More importantly, however, we are trying to represent
actual problems %2directly%* as machine problems. We do not attempt to
first analyze them into a complex mathematical theory, but should
try to express our intuitive theory directly as data-structure computation.
This is a different kind of thinking, due wholly to the advent of 
computers.  Indeed the field of computation has expanded so much
as to obsolete the term, "computer". "Structure processor" is more
indicative of the proper level at which we should view "computers".

We have already seen a simple example of the representation  problem
in the discussion of list-notation beginning in {yonss(P100)}.


.BEGIN GROUP;TABIT1(35);

list algorithm => LISP function\|
\|  evaluation
\|==============> re-interpret Sexpr-result as list-output.
\|
list expression => S-expression\|

.END

The following sections deal with the representation problem as applied
to LISP. However it will be %2us%* who will be doing the representation
of the problem domain in terms of LISP functions and data-structures,
and it will be %2us%* who will have to reinterpret the resultant
data-structure. See {yonss(P105)} for a partial solution to the 
"input/output" transformation problem.
.SS(Differentiation,differentiation,P41:)

This example will describe a rudimentary differentiation
routine for polynomials in several variables.  First you should
realize that the normal definition of differentiation is
recursive!  The question of differentiation of a sum is thrown
back on the differentiation of each summand.  Similar relationships 
hold for products, differences, and powers.  As with all
good recursive definitions, there must be some termination
conditions.  What are the termination conditions here?  
Differentiation of a variable, say %3x%*, with respect to %3x%* is defined to be 1;
differentiating a constant, or a variable not equal to %3x%* with
respect to %3x%* gives a result of zero.  

It is easy to write recursive
algorithms in LISP; the only problem is that the domain (and
range) of LISP functions is S-exprs, not the polynomials which
we need.  So we need a way to represent arbitrary polynomials as S-exprs.
Though polynomials can be arbitrarily complex, involving plus,
times, minus, powers, etc., their general format is very simple
if they are described in prefix notation (standard 
 functional notation) and we assume that +, *, - and ** are
binary operations.
For example:
.GROUP SKIP 1;
.BEGIN TABIT2(30,45);
.GROUP
\%2infix\prefix
%3

\x*z+2y\+[*[x,z], *[2,y]]
\x*y*z\*[x,*[y,z]]
%1
.APART
.END

How does this help us?  We still don't have S-exprs.  First,
the operations need to be represented as atoms:

.BEGIN TABIT2(30,45);group;
\%2operation\representation
%3

\+\PLUS
\-\MINUS
\*\TIMES
\**\EXPT
%1
.END
How about a representation of variables and constants?  Let
a variable be mapped to its uppercase counterpart, which is a LISP atom.
Let constants (numerals) be just the numerals;
also respectable LISP atoms.  Looking ahead to the algorithm, the termination 
condition on variables and constants can then just be
given in terms of the predicate, %3atom%*.

.GROUP
That is, if %3u%* is a constant or a variable then:

.BEGIN TABIT2(10,20);
\D%3u%*/D%3x%* = 1\if %3x = u
\\%10 otherwise
.APART

.GROUP
will translate to part of a binary LISP function named %3diff%*:

\%3diff[u;x] <= [atom[u] → [eq[u;x] → 1; T → 0] ....%*
.END
.APART

Now finally, how can we represent an arbitrary polynomial as an S-expr?

Write:
.BEGIN TABIT2(30,45);
%1
\+[x,y]\as %3(PLUS X Y)%*
\*[x,y]\as %3(TIMES X Y)%*
\**[x,y]\as %3(EXPT X Y)%*
%1
or in general:
%3
←f[t%41%*,t%42%*]    %1as %3(F%*, representation of %3t%41%1, representation of %3t%42%3)
%1

.GROUP
For example:
%3

←x%82%* + 2yz + u
%1
.APART
will be translated to the following prefix notation:

%3
←+[**[x,2], +[*[2,*[y,z]], u]]     ⊗↓%1This is messier than it really needs to 
be because we assume that + and * are binary.%*←  
.FILL
%1

From this it's easy to get the S-expr
form:
.NOFILL
%3
←(PLUS (EXPT X 2)  (PLUS (TIMES 2 (TIMES Y Z)) U))
%1
.FILL
    



Now we can complete the differentiation algorithm.  We know:

%1
.BEGIN TABIT3(10,28,39);
\D[%3f + g%*]/D%3x%* = D%3f/%*D%3x + %*D%3g%*/D%3x.
%1

We would see %3u = f + g%* as %3u = (PLUS, %*rep of %3f%*, rep of %3g)%*,
where:\%3cadr[u] = %*rep of %3f
\caddr[u] = %*rep of %3g%*.        ⊗↓We have done a reasonably evil thing here.
We have tied the algorithm for symbolic differentiation to a specific 
representation for polynomials. It is useful to use a specific representation
for the moment and repent later. In particular, see {yon(P74)}, {yon(P60)}
and {yon(P73)}.←


.FILL
The result of differentiating %3u%* is to be a new list of three
elements:
.NOFILL

\1.  the symbol %3PLUS%*.
\2.  the effect of %3diff%* operating on the rep. of %3f%*.
\3.  the effect of %3diff%* operating on the rep. of %3g%*.

Thus another part of the algorithm:

%3
\eq [car[u];PLUS] →\list [PLUS; diff[cadr[u];x];diff[caddr[u];x]]
%1.

D[%3f - g]/%*D%3x%* is very similar.

D[%3f*g%*]/D%3x%* is defined to be %3f*%1D%3g%*/D%3x + g*%1D%3f%*/D%3x%1.


So here's another part of %3diff%*:
.GROUP
%3
\eq[car[u];TIMES] →\list[PLUS; 
\\     list[TIMES; cadr[u];diff [caddr[u];x]];
\\     list[TIMES;caddr[u];diff [cadr[u];x]]]
%1
.APART

Here's an example. We know:

←D[%3x*y + x]%*/D%3x = y + 1%*

.END
.GROUP
Try:
%3
.BEGIN TABIT3(10,17,23);
diff [(PLUS (TIMES X Y) X); X]
\=list [PLUS; diff [(TIMES X Y); X]; diff [X;X]]
\=list [PLUS; 
\\list [PLUS; 
\\\list [TIMES; X; diff [Y;X]];
\\\list [TIMES; Y; diff [X;X]]];
\\diff [X;X]]

\=list [PLUS;
\\list [PLUS;
\\\list [TIMES; X ;0];
\\\list [TIMES; Y;1]];
\\1 ]

.APART
\=(PLUS  (PLUS (TIMES X 0) (TIMES Y 1)) 1)

.END
%1
which can be interpreted as:

%3
←x*0 + y*1 + 1 .
%1

Now it is clear that we have the right answer; it is equally clear that
the representation leaves much to be desired. There are obvious
simplifications which we would expect to have done before we would
consider this output acceptable. This example is a
particularly simple case for algebraic simplification. We can easily
write a LISP program to perform simplifications like those expected here:
replacing %30*x%1 by %30%*, and %3x*1%* by %3x%*. But the general
problem of writing simplifiers, or indeed of recognizing 
what is a simplification, is quite difficult.
A whole branch of Computer Science has grown up around 
symbolic and algebraic manipulation of expressions. One of the
crucial parts of such an endeavor is a sophisticated simplifier.

.END
.<<abstracting from rep>>
.CENT(Points to note)

This problem of representation is typical of data structure
algorithms (regardless of what language you use).  That is,
once you have decided what the intuitive problem is, pick a
representation which makes your algorithms clean (then worry
about efficiency).  In the next set of examples we will see a
series of representations, each becoming more and more "efficient"
and each requiring more "knowledge" built into the algorithm.

Here's the whole algorithm for differentiation using + and *.
%3
.BEGIN TABIT3(10,35,41);
.GROUP

diff[u;x] <=
\[atom [u] → [eq [x;u] → 1; T → 0];
\ eq [car [u]; PLUS] → list\[PLUS; 
\\ diff [cadr [u]; x];
\\ diff [caddr [u]; x]];
\ eq [car [u]; TIMES] → list\[PLUS; 
\\ list\[TIMES; 
\\\ cadr [u];
\\\ diff [caddr [u]; x]];
\\ list\[TIMES; 
\\\ caddr [u];
\\\ diff [cadr [u]; x]]];
\ T → LOSER]
.APART
.END
.select 1;
.P74:

As we mentioned earlier, the current manifestation of %3diff%* encodes too
much of our particular representation for polynomials. The separation
of algorithm from representation is beneficial from at least two standpoints.
First, changing representation should have a minimal effect on the structure
of the algorithm; yet this %3diff%* %2knows%* that variables are represented as atoms
and that a sum is represented as a list whose %3car%*-part is %3PLUS%*.
Second, when the representation pervades the algorithm, its readability suffers greatly.

How much of %3diff%* really needs to know about the representation and
how can we improve the readability of %3diff%*?

First the %3car-cdr%* chains are not particularly mnemonic. We used
%3cadr%* to get the first argument to a sum or product and used %3caddr%*
to get the second.
We used %3car%* to extract the operator.

Let's define the selectors:

.BEGIN CENTERIT;SELECT 3;group;
←op[x] <= car[x]
←arg%41%*[x] <= cadr[x]
←arg%42%*[x] <= caddr[x]
.end

Then %3diff%* becomes:

.BEGIN TABIT3(10,35,41);select 3;
.GROUP

diff[u;x] <=
\[atom [u] → [eq [x;u] → 1; T → 0];
\ eq [op[u]; PLUS] → list\[PLUS; 
\\ diff [arg%41%* [u]; x];
\\ diff [arg%42%* [u]; x]];
\ eq [op[u]; TIMES] → list\[PLUS; 
\\ list\[TIMES; 
\\\ arg%41%* [u];
\\\ diff [arg%42%* [u]; x]];
\\ list\[TIMES; 
\\\ arg%42%* [u];
\\\ diff [arg%41%* [u]; x]]];
\ T → LOSER]
.APART
.END

Still, there is much of the representation present. Recognition of variables
and other terms can be abstracted from the representation. We need only
recognize when a term is a sum, a product, a variable or a constant.
In terms of the current  representation we could define such recognizers or
predicates as:

.BEGIN CENTERIT; SELECT 3;group;

←issum[x] <= eq[op[x];PLUS]
←isprod[x] <= eq[op[x];TIMES]
←isconst[x] <= numberp[x]
←isvar[x] <= [atom[x] → not[isconst[x]]; T → NIL]
←samevar[x;y] <= eq[x;y]

.END
Using these predicates we can rewrite %3diff%* as:

.BEGIN TABIT3(10,27,33);select 3;
.GROUP

diff[u;x] <=
\[isvar[u] → [samevar[x;u] → 1; T → 0];
\ isconst[u] → 0;
\ issum[u] → list\[PLUS; 
\\ diff [arg%41%* [u]; x];
\\ diff [arg%42%* [u]; x]];
\ isprod[u] → list\[PLUS; 
\\ list\[TIMES; 
\\\ arg%41%* [u];
\\\ diff [arg%42%* [u]; x]];
\\ list\[TIMES; 
\\\ arg%42%* [u];
\\\ diff [arg%41%* [u]; x]]];
\ T → LOSER]
.APART
.END

Readability is certainly improving, but the representation is still
known to %3diff%*.
When we build the result of the sum or product of derivatives we use
knowledge of the representation.
It would be better to define:
.BEGIN CENTERIT;SELECT 3;group;

←makesum[x;y] <= list[PLUS;x;y]
←makeprod[x;y] <= list[TIMES;x;y]
.END
Then the new %3diff%* is:

.BEGIN TABIT2(10,31);select 3;
.GROUP
.p101:

diff[u;x] <=
\[isvar[u] → [samevar[x;u] → 1; T → 0];
\ isconst[u] → 0;
\ issum[u] → makesum[diff [arg%41%* [u]; x];diff[arg%42%* [u]; x]];
\ isprod[u] → makesum[\makeprod[arg%41%* [u];diff[arg%42%* [u]; x]];
\\ makeprod[arg%42%* [u];diff [arg%41%* [u]; x]]];
\ T → LOSER]
.APART
.END
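Notice too that the constructors are a natural place for simplifications of the
sort mentioned earlier. One possibility, not pursued further here, is:

.BEGIN CENTERIT;SELECT 3;group;

←makesum[x;y] <= [eq[x;0] → y; eq[y;0] → x; T → list[PLUS;x;y]]
←makeprod[x;y] <= [eq[x;0] → 0; eq[y;0] → 0; eq[x;1] → y; eq[y;1] → x; T → list[TIMES;x;y]]
.END

With such constructors, %3diff[(PLUS (TIMES X Y) X); X]%* would yield
%3(PLUS Y 1)%*, that is %3y + 1%*, directly.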

Notice that %3diff%* is much more readable now and more importantly, the details
of the representation have been relegated to subfunctions. Changing
representation simply requires supplying different subfunctions. No changes
need be made to %3diff%*. There has only been a slight decrease in efficiency.
The termination condition in the original %3diff%* is a bit more succinct.
Looking back, we first abstracted the selector 
functions, those which selected components; then we abstracted the recognizers,
the predicates telling which term was present; then we modified
the constructors, the functions which make new terms.
These three components of programming: selectors, recognizers, and constructors,
will appear again on {yon(P75)} in discussing McCarthy's abstract syntax.

Indeed, this terminology is not new to you. It appeared in conjunction
with list notation ({yonss(P100)}): the selectors are %3first%* and %3rest%*,
the recognizers are %3islist%* and %3null%*, and %3concat%* is a constructor.

The notions were introduced in the discussion of the
basic LISP operations: the selectors are %3car%* and %3cdr%*, the constructor
is %3cons%* and the recognizer is %3atom%*.

Finally, notice that our abstraction process has masked the order-dependence
of conditional expressions. Exactly one of the recognizers will be satisfied
by the form, %3u%*.



.SS(Algebra of polynomials)
.BEGIN TURN ON "#";

%1
Assume we want to perform addition and multiplication
of polynomials and assume that each polynomial is of the
form %3p%41%* + p%42%* + ... + p%4n%1 where each %3p%4i%1 is a product of variables
and constants.  The constant part of each %3p%4i%1 is called the coefficient
and the variable part is called the exponent.
We shall assume without loss of generality that the 
variables which appear in the polynomials are lexicographically
ordered, e.g.#%3x#<#y#<#z,#..., %1and assume that the exponent parts
obey that ordering; thus we would insist that %3xz%82%3y%1 be written %3xyz%82%1.
We further assume that the exponents of
each %3p%4i%1 be distinct and that no %3p%4i%1 have %30%* as coefficient.
The standard algorithm for addition of
%9S%8n%4i=1%3p%4i%1 with   %9S%8m%4j=1%3q%4j%1 says you can combine a
%3q%4j%1 with a %3p%4i%1 if the exponent parts of these terms 
are identical.  In this
case the resulting term has the same exponent  but has a
coefficient equal to the sum of the coefficients of %3p%4i%1
and %3q%4j%1.
For example if %3p%4i%1 is %32x%* and %3q%4j%1 is %33x%* the sum term is %35x%*.
We will examine four representations of polynomials, before finally
writing any algorithms. To aid in the discussion we will use
the polynomial %3x%82%* - 2y - z%1 as our canonical example.

.CENT(First representation:)

We could use the representation of the differentiation
example.  
This would represent our example as:

.BEGIN CENTERIT;SELECT 3;
←(PLUS (TIMES 1 (EXPT X 2)) (PLUS (TIMES -2 Y) (TIMES -1 Z)))
.END

Though this representation is sufficient, it is more complex than necessary.

.CENT(Second representation:)
We are really only interested in testing the equality of the exponents;
we will not be manipulating them in any other way.
So we might simply represent the exponent as a list of pairs;
each pair contains a variable name and the coresponding value of the exponent.
Using our knowledge of the forms of polynomials and the class
of algorithms we which to implement, we write
%9S %3p%4i%1 as:

←%3( (%1rep of %3p%41%*) (%1rep of %3p%42%*) ...)%1

which would make our example look like:

.BEGIN CENTERIT;SELECT 3;

←((TIMES 1 ((X . 2))) (TIMES -2 ((Y . 1))) (TIMES -1 ((Z . 1))))
.END

Is this representation sufficient?  Does it have the 
flexibility we need?  It  %2does%* suffice but it is still not terribly 
efficient. We are ignoring too much of the structure in our class of 
polynomials.

.CENT(Third representation:)

What do we know? We know that the occurrence of variables is 
ordered in each exponent part; we can assume that we know the class of 
variables which
may appear in any polynomial. So instead of writing %3x%82%*y%83%*z%1
as

←%3((X . 2) (Y . 3) (Z . 1))%*, we could write:


←%3(2 3 1)%*   assuming %3x, y, z%* are the only variables.

In a further simplification, notice that the  %3TIMES%* in the
representation is superfluous. We %2always%* multiply the coefficient by
the exponent part. So we could simply %3cons%* the coefficient 
onto the front of the exponent representation.

Let's stop for some examples.
.BEGIN NOFILL;TURN ON "\";TABS 30,45;

\%2term\representation
\%32xyz\(2 1 1 1)
\2x%82%*z\(2 2 0 1)
\4z%83%*\(4 0 0 3)
.END

Thus our canonical polynomial  would now be represented as:

.BEGIN CENTERIT;SELECT 3;
←((1 2 0 0)(-2 0 1 0)(-1 0 0 1))
.END

This representation is not too bad; the %3first%*-part of any
term is the coefficient; the %3rest%*-part is the exponent. So,
for example the test for equality of exponents is simply a call on %3equal%*.

Now let's start thinking about the structure of the main algorithm.

.CENT(Fourth representation:)

The algorithm for the sum must compare terms; finding like terms it will
generate an appropriate new term, otherwise simply copy the terms.
When we pick a %3p%4i%1 from the first polynomial we would
like to find a corresponding %3q%4j%1 with the minimum amount of
searching.  
This can be accomplished if we can order the terms
in the polynomials. 
A natural ordering can be induced on  the terms  by
ordering  the numerical representation of the exponents.
For the sake of argument, assume 
that a maximum of two digits will be needed to express
the exponent of any one variable. 
Thus the exponent of %3x%82%1 will be represented as %302%*, or 
the exponent of %3z%810%1 will be represented as %310%*. Combining this with our
ordered representation of exponent parts, we arrive at:
.GROUP
.BEGIN NOFILL;TABS 30,45;TURN ON "\";

\%2term\representation

.SELECT 3;
\43x%82%*y%83%*z%84\%3(43, 020304)
\2x%82%*z\%3(2, 020001)
\4z%83%*\(4, 000003)
.END
.APART
%1

Now we can order on the numeric representation of the exponent
part of the term.  One more simplification:

←represent %3 ax%8A%**y%8B%**z%8C%1 as %3(a . ABC).

.SELECT 1;
This gives our final representation.

.BEGIN CENTERIT;SELECT 3;
←((1 . 20000)(-2 . 100)(-1 . 1))
.END

Note that %320000 > 100 > 1%1.
  
Finally we will write the algorithm.  We will assume that
the polynomials are initially ordered and will write the 
algorithm so as to maintain that ordering.
Each term is a dotted pair of elements: the coefficient
and a representation of the exponent.

.P60:
As in the previous differentiation example, we should 
attempt to extract the algorithm
from the representation.
We shall define:

←%3coef[x] <= car[x]%1 and %3 expo[x] <= cdr[x].
%1

To test the ordering we will use the LISP predicate:

←%3greaterp[x;y] = x>y.
%1

In the construction of the `sum'
polynomial we will generate new terms by combining coefficients.
So a constructor named %3node%* is needed.
In terms of the latest representation %3node%* is defined as:

←%3node[x;y] <= cons[x;y].%1

.GROUP
So here's a graphical representation of our favorite polynomial:

←%3x%82%* - 2y - z %1

%3
.BEGIN NOFILL;




  [1 . 20000] → [-2 . 100] → [-1 . 1] → NIL

.APART

.GROUP
%1
Here's the algorithm:
%3
.BEGIN NOFILL;TABS 3,10,35;TURN ON "\";
polyadd[p;q] <=
\[null[p] → q; null[q] → p;
\ eq[expo[car[p]];expo[car[q]]] →\[zerop[plus[coef[car[p]];coef[car[q]]]]
\\\    → polyadd[cdr[p];cdr[q]];
\\\ T → cons[node[plus[coef[car[p]];coef[car[q]]];
\\\             expo[car[p]]];polyadd[cdr[p];cdr[q]]]];
\ greaterp[expo[car[p]];expo[car[q]]] → cons[car[p];polyadd[cdr[p];q]];
\ T → cons[car[q];polyadd[p;cdr[q]]]]
.END
%1
.APART

.GROUP
Now for an explanation and example:

First we used some new LISP functions:

←%3plus[x;y] = x + y
←zerop[x]  <= eq[x;0]
.APART
%1
.P72:

%3polyadd%* is of the form: %3[p%41%* → e%41%*; p%42%* → e%42%*; p%43%* → e%43%*; p%44%* → e%44%*; p%45%* → e%45%*]

.BEGIN SELECT 1;INDENT 0,10;FILL;
Case i: %3p%41%3 → e%41%1 and %3p%42%* → e%42%1. If either polynomial is empty return the other.

Case ii: %3p%43%* → e%43%1.  If the exponents are equal then we can think about combining
terms.  However, we must check for cancellations and not include any 
such terms in our resultant polynomial.

Case iii: %3p%44%* → e%44%1 and %3p%45%* → e%45%1. These sections worry about the ordering of
terms so that the resultant polynomial retains the ordering.
.END

%1
Here's an informal execution of %3polyadd:
.GROUP

.BEGIN NOFILL;TABS 10;TURN ON "\";
polyadd[x+y+z;x%82%*-2y-z]
\= cons[x%82%*;polyadd[x+y+z;-2y-z]]
\= cons[x%82%*;cons[x;polyadd[y+z;-2y-z]]]
\= cons[x%82%*;cons[x;cons[node[1+-2;y];polyadd[z;-z]]]]
\= cons[x%82%*;cons[x;cons[-y;polyadd[z;-z]]]]
\= cons[x%82%*;cons[x;cons[-y;polyadd[NIL;NIL]]]]
\= cons[x%82%*;cons[x;cons[-y;NIL]]]
←= x%82%*+x-y
.END
.APART
.END
.END
.END
.SS(Evaluation of polynomials,,P2:)
.BEGIN TURN ON "←";
Given an arbitrary polynomial, and values for any of the variables which it
contains, we would like to compute its value. For this section we will
assume that  the substitutions of values for variables has already been
carried out. Thus we are dealing with polynomials of the form: %9S%8n%4i=1%3p%4i%1
where %3p%4i%1 is a product of powers of constants. For example:

←%32%83%* + 3*4%82%* + 5.%1  This could be represented as:

←%3(PLUS (EXPT 2 3) (PLUS (TIMES 3 (EXPT 4 2)) 5)).%*

We will now describe a LISP function, %3value%*, which will take such an
sexpr representation and compute its value. First, input to %3value%* will be
numbers or lists beginning with either %3PLUS, TIMES,%* or %3EXPT%*.
The value of a number is that number; to evaluate the other forms of input
we should perform the operation represented. We must therefore assume
that operations of plus, times and exponentiation exist.  Assume they are
named +, *, and ↑, respectively. What then should be the value of a list whose %3car%*-part
is %3PLUS%*? It should be the result of adding the value of the %3cadr%* of the
list to the value of the %3caddr%* of the list. That is, %3value%* is clearly
recursive.
  To test for the occurrence of a number we shall assume
a unary LISP predicate called %3numberp%* which returns %3T%* just in the case
that its argument is a number.

It should now be clear how to write %3value%*:

.BEGIN TABIT1(10);SELECT 3;
value[x] <=\[numberp[x] → x;
\ eq[car[x];PLUS] → +[value[cadr[x]];value[caddr[x]]];
\ eq[car[x];TIMES] → *[value[cadr[x]];value[caddr[x]]];
\ eq[car[x];EXPT] → ↑[value[cadr[x]];value[caddr[x]]]]

.END
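Applied to the representation above, %3value%* computes, informally:

.BEGIN TABIT1(10);SELECT 3;GROUP;
\value[(PLUS (EXPT 2 3) (PLUS (TIMES 3 (EXPT 4 2)) 5))]
\    = +[↑[2;3]; +[*[3;↑[4;2]]; 5]]
\    = +[8; +[48; 5]]
\    = 61
.END
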
Or more abstractly:

.BEGIN TABIT1(10);SELECT 3;
value[x] <=\[isnumber[x] → x;
\ issum[x] → +[value[arg%41%*[x]];value[arg%42%*[x]]];
\ isprod[x] → *[value[arg%41%*[x]];value[arg%42%*[x]]];
\ isexpt[x] → ↑[value[arg%41%*[x]];value[arg%42%*[x]]]]

.END
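where %3isnumber%* and %3isexpt%* are recognizers in the style of those in the
differentiation example; in terms of the current representation they might be
defined as:

.BEGIN CENTERIT;SELECT 3;group;

←isnumber[x] <= numberp[x]
←isexpt[x] <= eq[car[x];EXPT]
.END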
.END

.<<PROBLEMS ON POLY EVAL AND THE GREAT MOTHERS>>
.SELECT 1;
.REQUIRE "PROB6.PUB"SOURCE_FILE;
.SS(Another Respite)

We have again reached a point where a certain amount of reflection would be
good for the soul.  
Though this is not a programming manual we would be remiss if we did not
attempt to analyze the style which we have tried to exercise when writing
programs. 

1. Write the algorithm in an abstract setting; do not muddle the abstract
algorithm with the chosen representation.  If you follow this dictum your
LISP programs will never use %3car, cdr, cons%1, and %3atom%*.
All instances of these LISP primitives will be banished to small subfunctions
which manipulate representations. 

2. When writing the abstract program, do not be afraid to  cast off
difficult parts of the implementation to subfunctions. Remember that if
you have trouble keeping the details in mind when %2writing%* the program,
then the confusion involved in %2reading%* the program at some later time
will be overwhelming. Once you have convinced yourself of the
correctness of the current composition, then worry about the construction
of the subfunctions. Seldom does the process of composing a program flow
so gently from top-level to specific representation. 
Only the toy programs are easy; the construction of a practical
program will be confusing, and will require much rethinking.
But bring as much structure as you can to the process.

3. From the other side of the question, don't be afraid to look at
specific implementations, or specific data-structure representations before
you begin to write. There is something quite comforting about a "real"
data structure. Essentially data structures are static objects
⊗↓At least within the program presently being constructed.←, while
programs are dynamic objects. A close look at a possible representation
may get you a starting point and as you write the program it will become
clear where you are depending on the specific representation and when
you are just using properties of an abstract data structure.

Perhaps the more practical reader is overcome by the inefficiencies inherent
in these proposals. Two answers: first, "inefficiency" is a very relative concept.
Hardware and software development has been such that last year's "inefficiency"
is this year's %3passe%*  programming trick. 
But even at a more topical level, much of what seems inefficient can now be
straightened out by a compiler (see {yonsec(P107)}). 
Frequently compilers can do very clever optimizations to generate efficient code.
It is better to leave the cleverness to the compiler, and the clarity to the 
programmer.

Second, the problems in programming are not those of efficiency. They are
problems of %2correctness%*. How do you write a program which works!
Until practical tools are developed for proving correctness it is up
to the programmer to certify his programs. Any methodology which can
aid the programmer will be most welcome.
Clearly, the closer you can write the program to your intuition, the
less chance there is for error. This was one of the reasons for developing
high-level languages. But do not forget that the original motivation for
such languages was a convenient notation for expressing numerical problems.
That is, writing programs to express ideas which have already had their
juices extracted as mathematical formulas.
With data structures, we are attempting to enter the formalization process 
earlier, expressing our ideas as data structure manipulations rather
than as numerical relationships.

What kinds of errors are prevalent in data structure programming? At least
two kinds: errors of omission#--#misunderstanding of the basic algorithm; and
errors of commission#--#errors due to misapplied cleverness in attempting
to be efficient.

Errors of omission can be mollified by  presenting the user with
programming constructs which are close to the intuited algorithm.
This involves control structures, data structures, and function representations.

Errors of commission are easier to legislate against and comprise
the great majority of the present day headaches. It is here that programming
%2style%* can be  beneficial: keep the representation distinct from the
abstract algorithm; write concise programs, passing off responsibilities to
subfunctions. Whenever a definition of "structured programming" is arrived at,
this advice on programming style should be included.

Before closing this discussion on the philosophy of LISP programming, we
can't but note that the preceding section, %2The Great Mothers%*, has 
completely ignored our good advice. This would be a good time for the
interested reader to abstract the %3tgmoaf%* algorithm from the
particular data representation. This detective work will be most
rewarding.

.CENT(Problems)

I. Write an abstract version of %3tgmoaf%*.
.<<PROVING PROPERTIES>>
.NEXT PAGE;
.SS(Proving properties of programs,,P15:)
.BEGIN TURN ON "←","#";
People are becoming increasingly aware of the importance of giving 
convincing
arguments for such things as the correctness or equivalence of programs. These are
both very difficult enterprises. We will only sketch a proof
of a simple property of two programs and leave others as problems for the 
interested reader.


***more***


Using the definition of %3append%* given on {yon(P22)} and the definition
of %3reverse%* given on {yon(P23)}, we wish to show that:

←%3append[reverse[y];reverse[x]] = reverse[append[x;y]]%*,

for any lists, %3x%*, and %3y%*. The induction will be on the structure of %3x%*.

.BEGIN TABIT3(5,10,15);
\|%2Basis%*: %3x%* is %3NIL%*.
\|We must thus show: %3append[reverse[y];NIL] = reverse[append[NIL;y]]%*
\|But: %3reverse[append[NIL;y]] = reverse[y]%*  by the def. of %3append%*.
\|We now establish the stronger result:  %3append[z;NIL] = z%*
\\|%2Basis:%* %3z%* is %3NIL%*.
\\|Show %3append[NIL;NIL] = NIL%*. Easy.
\\|%2Induction step:%* Assume the lemma for lists, %3z%*, of length n;
\\|Prove: %3append[cons[x;z];NIL] = cons[x;z]%*.
\\|Since %3cons[...]%* is not %3NIL%*, then applying the definition of %3append%*
\\|says we must prove: %3cons[x;append[z;NIL]] = cons[x;z]%*.
\\|But an application of the induction hypothesis gives our result.
\|So the Basis for our main result is established.
\|%2Induction step:%* Assume the result for lists, %3z%*, of length n;
\|Prove:
\|←   (1)   %3append[reverse[y];reverse[cons[x;z]]] = reverse[append[cons[x;z];y]]%*
\| Using the definition of %3reverse%* on the LHS of (1) gives:
\|←   (2)   %3append[reverse[y];append[reverse[z];list[x]]]%*.
\| Using the definition of %3append%* on the RHS of (1) gives:
\|←   (3)   %3reverse[cons[x;append[z;y]]].%*
\| Using the definition of %3reverse%* on (3) gives:
\|←   (4)   %3append[reverse[append[z;y]];list[x]].%*
\| Using our induction hypothesis on (4) gives:
\|←   (5)   %3append[append[reverse[y];reverse[z]];list[x]]%*
\| Thus we must establish that    (2) = (5).
\| But this is just an instance of the associativity of %3append%*:
\|←%3append[x;append[y;z]] = append[append[x;y];z].%*  (See problem I, below.)
.END

The structure of the proof is analogous to proofs by mathematical 
induction in elementary number theory. The ability to perform such proofs
is a direct consequence of our careful definition of data structures.
Examination of the proof will show that there is a close relationship
between what we are inducting on in the proof and what  we are recurring on
during the evaluation of the  expressions. A program
written by Boyer and Moore has been reasonably successful in generating
proofs like the above by exploiting this relationship.

.CENT(Problems)

I. Prove the associativity of %3append%*.

II. Analysis of the above proof shows frequent use of other results for LISP
functions. Fill in the details. Investigate the possibility of formalizing
this proof, showing what axioms are needed.

III. Show the equivalence of %3fact%* ({yon(P20)}) and %3fact%41%1 ({yon(P21)}).

IV. Show the equivalence of %3length%* and %3length%41%1 ({yon(P19)}).

V. Using the definition of %3reverse%*, given on {yon(P16)}, prove:

←%3reverse[reverse[x]] = x%*.

VI. Prove that the construction %3atom[x] → eq[x;y]%* is equivalent to
%3atom[x]#→#[eq[x;y]#→#T;#T#→#NIL]%*. See {yon(P24)}.
.END